Conversation
- Added execution timeout configuration in `api.php`.
- Implemented non-blocking process execution with `stream_select`.
- Added timeout mechanism for `proc_open` calls.
- Implemented output size limits (10MB for stdout, 1MB for stderr).
- Added stderr capture and sanitized logging for MCP tool execution.
- Ensured pipes are drained after process termination.
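The changes listed above can be sketched as a single helper. This is an illustrative shape only, not the PR's actual code: the function name `runWithLimits`, the 200ms poll interval, and the read chunk size are assumptions; the timeout and drain-after-exit behavior follow the changelog.

```php
<?php
// Sketch: run a command with a wall-clock timeout, non-blocking reads via
// stream_select, output size caps, and a final drain after the process exits.
function runWithLimits(array $cmd, int $timeoutSeconds = 30): array
{
    $spec = [1 => ['pipe', 'w'], 2 => ['pipe', 'w']];
    $process = proc_open($cmd, $spec, $pipes);
    if (!is_resource($process)) {
        throw new \RuntimeException('Failed to start process');
    }

    stream_set_blocking($pipes[1], false);
    stream_set_blocking($pipes[2], false);

    $stdout = '';
    $stderr = '';
    $deadline = microtime(true) + $timeoutSeconds;

    while (true) {
        $read = [$pipes[1], $pipes[2]];
        $write = null;
        $except = null;

        // Wait up to 200ms for either pipe to have data; $read is narrowed
        // by stream_select to the streams that are actually ready.
        if (stream_select($read, $write, $except, 0, 200000) > 0) {
            foreach ($read as $stream) {
                $chunk = (string) fread($stream, 65536);
                if ($stream === $pipes[1]) {
                    $stdout .= $chunk;
                } else {
                    $stderr .= $chunk;
                }
            }
        }

        // Enforce the output size caps (10MB stdout, 1MB stderr).
        if (strlen($stdout) > 10 * 1024 * 1024 || strlen($stderr) > 1024 * 1024) {
            proc_terminate($process);
            throw new \RuntimeException('Output size limit exceeded');
        }

        // Enforce the wall-clock timeout.
        if (microtime(true) > $deadline) {
            proc_terminate($process);
            throw new \RuntimeException('Execution timed out');
        }

        if (!proc_get_status($process)['running']) {
            // Drain any remaining buffered output after the process exits.
            $stdout .= (string) stream_get_contents($pipes[1]);
            $stderr .= (string) stream_get_contents($pipes[2]);
            break;
        }
    }

    fclose($pipes[1]);
    fclose($pipes[2]);
    proc_close($process);

    return [$stdout, $stderr];
}
```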
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. There might be a delay between these steps, but rest assured I'm on the job! For more direct control, you can switch me to Reactive Mode: when it's on, I will only act on comments that specifically mention me. New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task.
Summary of Changes

Hello @Snider, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly enhances the security and stability of the MCP tool execution path.

Highlights
Changelog
Activity
QA Pipeline Results
Artifacts
Generated by core php qa pipeline
Code Review
This pull request introduces crucial security enhancements to the executeToolViaArtisan method by implementing timeouts and output size limits, effectively mitigating potential resource exhaustion and denial-of-service vulnerabilities. The switch to non-blocking I/O with stream_select is a solid approach for handling process execution safely. My review includes a few suggestions to further improve the robustness and maintainability of the new implementation.
// Read remaining output
$output .= stream_get_contents($pipes[1]);
$errorOutput .= stream_get_contents($pipes[2]);
break;
When the process terminates, the remaining output is read from the pipes using stream_get_contents. However, the size of the output read at this stage is not checked against the defined limits. This could allow a process to bypass the size restrictions by outputting a large amount of data just before exiting. You should add the size limit checks after these final reads and before breaking the loop.
// Read remaining output
$output .= stream_get_contents($pipes[1]);
$errorOutput .= stream_get_contents($pipes[2]);
if (strlen($output) > 10 * 1024 * 1024) { // 10MB absolute max
throw new \RuntimeException('Output size limit exceeded');
}
if (strlen($errorOutput) > 1024 * 1024) { // 1MB max for stderr
throw new \RuntimeException('Error output size limit exceeded');
}
break;

$output .= fread($pipes[1], 1024 * 1024);
$errorOutput .= fread($pipes[2], 1024 * 64);
The code currently attempts to read from both stdout and stderr pipes on every loop iteration where stream_select doesn't fail. This is inefficient because it doesn't check which stream is actually ready for reading. You should use the $read array, which is modified by stream_select to contain only the ready streams, to conditionally read from the correct pipe.
if (in_array($pipes[1], $read, true)) {
$output .= fread($pipes[1], 1024 * 1024);
}
if (in_array($pipes[2], $read, true)) {
$errorOutput .= fread($pipes[2], 1024 * 64);
}

$output .= fread($pipes[1], 1024 * 1024);
$errorOutput .= fread($pipes[2], 1024 * 64);

if (strlen($output) > 10 * 1024 * 1024) { // 10MB absolute max
    proc_terminate($process);
    throw new \RuntimeException('Output size limit exceeded');
}

if (strlen($errorOutput) > 1024 * 1024) { // 1MB max for stderr
    proc_terminate($process);
    throw new \RuntimeException('Error output size limit exceeded');
}
This block of code uses several magic numbers for buffer chunk sizes and maximum output sizes (e.g., 1024 * 1024, 10 * 1024 * 1024). To improve readability and maintainability, it's recommended to define these values as named constants (e.g., as class constants like private const STDOUT_MAX_SIZE = 10 * 1024 * 1024;). This makes the code self-documenting and simplifies future modifications to these limits.
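A minimal sketch of the suggested refactor. The class and constant names here are hypothetical, chosen to match the values flagged in the comment above:

```php
<?php
// Hypothetical named constants replacing the magic numbers in the read loop.
final class ToolExecutionLimits
{
    public const STDOUT_READ_CHUNK = 1024 * 1024;     // 1MB per fread() of stdout
    public const STDERR_READ_CHUNK = 64 * 1024;       // 64KB per fread() of stderr
    public const STDOUT_MAX_SIZE = 10 * 1024 * 1024;  // 10MB absolute max
    public const STDERR_MAX_SIZE = 1024 * 1024;       // 1MB max for stderr

    // Example guard using a constant instead of an inline literal.
    public static function stdoutExceeded(string $buffer): bool
    {
        return strlen($buffer) > self::STDOUT_MAX_SIZE;
    }
}
```

The loop body would then read `fread($pipes[1], ToolExecutionLimits::STDOUT_READ_CHUNK)` and compare against `STDOUT_MAX_SIZE`, keeping the limits in one place.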
- Added `repositories` section to `composer.json` pointing to `host-uk/core-php`.
- Added `require-dev` for `pestphp/pest`, `laravel/pint`, and `orchestra/testbench`.
- Enabled `pestphp/pest-plugin` in `allow-plugins`.
- Set `minimum-stability` to `dev` to allow `@dev` dependencies.
- Removed `composer.lock` which contained local absolute paths.
- Switched `repositories` in `composer.json` from `path` to `vcs` type.
- Added `host-uk/core-mcp` to requirements.
- Configured VCS endpoints for `host-uk/core-php` and `host-uk/core-mcp`.
- Maintained security fixes for MCP tool execution.
- Added `phpstan/phpstan`, `vimeo/psalm`, and `phpunit/phpunit` to `require-dev`.
- Maintained previous fixes: VCS repositories for `host-uk/core` and `host-uk/core-mcp`.
- Deleted `composer.lock` to ensure fresh resolution in CI.
Warning Rate limit exceeded
⌛ How to resolve this issue? After the wait time has elapsed, a review can be triggered again. We recommend spacing out commits to avoid hitting the rate limit.

🚦 How do rate limits work? CodeRabbit enforces hourly rate limits for each developer per organization. Paid plans have higher rate limits than the trial, open-source, and free plans; in all cases, further reviews are re-allowed after a brief timeout. Please see the FAQ for further information.
This PR addresses security concerns in the `McpApiController::executeToolViaArtisan()` method, where artisan commands were executed without timeouts or output sanitization.

Key changes:
- Added `api.mcp.execution_timeout` (default 30s) to prevent long-running processes from tying up server resources.
- Captured stderr and added sanitized, truncated logging (`Str::limit`) for better debugging while maintaining security.
- Used `stream_select` to safely handle the execution loop and ensure pipes are fully drained before closing.

Fixes #5
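For reference, a timeout setting like the one described would typically take this shape in a Laravel config file. This is an assumed sketch: the file layout and the `MCP_EXECUTION_TIMEOUT` environment variable name are hypothetical; only the key `api.mcp.execution_timeout` and the 30s default come from the PR.

```php
<?php
// config/api.php — assumed shape of the setting named above.
// In Laravel, env() would typically supply an override from the environment.
return [
    'mcp' => [
        // Wall-clock limit (seconds) for artisan tool execution.
        'execution_timeout' => env('MCP_EXECUTION_TIMEOUT', 30),
    ],
];
```

The controller would then read it with `config('api.mcp.execution_timeout')` and use it as the deadline for the `proc_open` loop.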
PR created automatically by Jules for task 7311204295205447069 started by @Snider